Schmidt-Hieber J - Nonparametric regression using deep neural networks with ReLU activation function

URL: https://arxiv.org/abs/1708.06633

Aim

  • The main theorem states that a sparsely connected deep ReLU network estimator attains the minimax estimation rate (up to log factors) if and only if it almost minimizes the empirical risk

New Terms

  • Minimax Estimator: an estimator δ^M that performs best in the worst case allowed by the problem, i.e. it attains
\begin{equation} \sup_{\theta \in \Theta} R(\theta,\delta^M) = \inf_\delta \sup_{\theta \in \Theta} R(\theta,\delta). \end{equation}
  • TODO Wavelet Series Estimators:
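The minimax display above can be made concrete with a toy sketch (not from the paper; the restriction to linear estimators and the closed-form risk are illustrative assumptions). For estimating the mean of N(θ, σ²) from n observations with Θ = [−1, 1], the worst-case risk of each candidate estimator is computed exactly, and the one with the smallest supremum is preferred:

```python
# Toy illustration of worst-case (sup) risk, the quantity a minimax
# estimator minimizes. Candidate estimators: delta_c(X) = c * Xbar.

def risk(c, theta, n=10, sigma2=1.0):
    # MSE of delta_c in closed form: bias^2 + variance
    # = (1 - c)^2 * theta^2 + c^2 * sigma2 / n
    return (1 - c) ** 2 * theta ** 2 + c ** 2 * sigma2 / n

# Grid over the parameter space Theta = [-1, 1]
thetas = [i / 100 for i in range(-100, 101)]

def worst_case_risk(c):
    # sup_{theta in Theta} R(theta, delta_c)
    return max(risk(c, t) for t in thetas)

# Sample mean (c = 1) vs. a shrinkage estimator (c = 0.9):
print(worst_case_risk(1.0))  # constant risk sigma2 / n = 0.1
print(worst_case_risk(0.9))  # smaller sup-risk: shrinkage trades variance for bias
```

Here the shrinkage estimator has strictly smaller worst-case risk than the sample mean, showing that minimizing the supremum over Θ can favor a biased estimator.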

Key Results

  • Near-optimal estimation rates for multilayer feedforward neural networks with ReLU activation.
  • Wavelet series estimators can only achieve suboptimal rates under the composition assumption.
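As a sketch of the rate in question (reconstructed from the paper's composition setting, where the regression function is f = g_q ∘ ⋯ ∘ g_0 with Hölder smoothness indices β_i and intrinsic dimensions t_i; constants and log factors are omitted here), the attainable rate is driven by the worst component of the composition:

\begin{equation} \phi_n = \max_{i=0,\dots,q} n^{-\frac{2\beta_i^*}{2\beta_i^* + t_i}}, \qquad \beta_i^* = \beta_i \prod_{l=i+1}^{q} (\beta_l \wedge 1), \end{equation}

where \(\beta_i^*\) is the effective smoothness of the i-th component after accounting for the layers composed on top of it.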

Limitations

  • Results are limited to ReLU-activated multilayer feedforward neural networks

Author: Sharan Yalburgi

Created: 2020-02-05 Wed 17:18